
    Android Permissions Remystified: A Field Study on Contextual Integrity

    Due to the amount of data that smartphone applications can potentially access, platforms enforce permission systems that allow users to regulate how applications access protected resources. If users are asked to make security decisions too frequently and in benign situations, they may become habituated and approve all future requests without regard for the consequences. If they are asked to make too few security decisions, they may become concerned that the platform is revealing too much sensitive information. To explore this tradeoff, we instrumented the Android platform to collect data on how often and under what circumstances smartphone applications access protected resources regulated by permissions. We performed a 36-person field study to explore the notion of "contextual integrity," that is, how often applications access protected resources when users are not expecting it. Based on our collection of 27 million data points and exit interviews with participants, we examine the situations in which users would like the ability to deny applications access to protected resources. We found that at least 80% of our participants would have preferred to prevent at least one permission request, and overall, they thought that over a third of requests were invasive and desired a mechanism to block them.
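    The measurement the study describes can be pictured with a short sketch. Assuming a hypothetical CSV log with one row per protected-resource access (app, permission, whether the app was visible on screen), the script below tallies what fraction of each app's accesses happened while the app was not visible, one rough proxy for accesses users are not expecting. The file layout, column names, and the visibility heuristic are illustrative assumptions, not the study's actual instrumentation.

```python
# Illustrative sketch only: the study's real instrumentation lives inside the
# Android platform. Here we assume a hypothetical CSV log with one row per
# permission-protected resource access: app, permission, app_visible (1/0).
import csv
from collections import defaultdict

def unexpected_access_rates(log_path):
    """Fraction of accesses per (app, permission) that occur while the app is not visible."""
    totals = defaultdict(int)
    hidden = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["app"], row["permission"])
            totals[key] += 1
            if row["app_visible"] == "0":   # background access: a rough proxy
                hidden[key] += 1            # for "the user was not expecting it"
    return {k: hidden[k] / totals[k] for k in totals}

if __name__ == "__main__":
    for (app, perm), rate in sorted(unexpected_access_rates("access_log.csv").items()):
        print(f"{app:30s} {perm:25s} {rate:.1%} of accesses while app not visible")
```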

    The Feasibility of Dynamically Granted Permissions: Aligning Mobile Privacy with User Preferences

    Current smartphone operating systems regulate application permissions by prompting users on an ask-on-first-use basis. Prior research has shown that this method is ineffective because it fails to account for context: the circumstances under which an application first requests access to data may be vastly different from the circumstances under which it subsequently requests access. We performed a longitudinal 131-person field study to analyze the contextuality behind user privacy decisions to regulate access to sensitive resources. We built a classifier to make privacy decisions on the user's behalf by detecting when context has changed and, when necessary, inferring privacy preferences based on the user's past decisions and behavior. Our goal is to automatically grant appropriate resource requests without further user intervention, deny inappropriate requests, and only prompt the user when the system is uncertain of the user's preferences. We show that our approach can accurately predict users' privacy decisions 96.8% of the time, a four-fold reduction in error rate compared to current systems.
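    A minimal sketch of the general idea the abstract describes, not the paper's actual model: train a classifier on contextual features and past decisions, grant or deny automatically when it is confident, and fall back to prompting the user when it is not. The feature encoding, toy data, and probability thresholds below are invented for illustration.

```python
# Minimal sketch, not the paper's classifier: a toy feature set of contextual
# signals (app, permission, foreground visibility, the user's last decision
# for that pair) and the "only prompt when uncertain" idea.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: rows are [app_id, permission_id, app_visible, last_decision]
X = np.array([
    [0, 0, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 1, 0, 0],
    [2, 0, 0, 0],
    [2, 1, 1, 1],
])
y = np.array([1, 0, 1, 0, 0, 1])  # 1 = user allowed, 0 = user denied

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def decide(features, prompt_band=(0.4, 0.6)):
    """Grant, deny, or fall back to prompting the user when confidence is low."""
    p_allow = clf.predict_proba([features])[0][1]
    if p_allow >= prompt_band[1]:
        return "grant"
    if p_allow <= prompt_band[0]:
        return "deny"
    return "prompt"   # uncertain: ask the user instead of guessing

print(decide([0, 0, 1, 1]))  # e.g. "grant" for a context resembling past allows
```

    Widening or narrowing the prompt band trades fewer interruptions against more automatic decisions made under uncertainty.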

    To authorize or not authorize: helping users review access policies in organizations

    This work addresses the problem of reviewing complex access policies in an organizational context using two studies. In the first study, we used semi-structured interviews to explore the access review activity and identify its challenges. The interviews revealed that access review involves challenges such as scale, technical complexity, the frequency of reviews, human errors, and exceptional cases. We also modeled access review in the activity theory framework. The model shows that access review requires an understanding of the activity context, including information about the users, their jobs, their access rights, and the history of the access policy. We then used activity theory guidelines to design a new user interface named AuthzMap. We conducted an exploratory user study with 340 participants to compare the use of AuthzMap with two existing commercial systems for access review. The results show that AuthzMap improved the efficiency of access review in 5 of the 7 tested scenarios, compared to the existing systems. AuthzMap also improved the accuracy of actions in one of the 7 tasks and negatively affected accuracy in only one task.
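    As a rough illustration of the context the interviews say reviewers need (the user, their job, their access rights, and the history of the access policy), the toy data model below bundles that information per review item; the field names and structure are assumptions, not AuthzMap's actual schema.

```python
# Illustrative only: a toy data model for the kind of context access reviewers
# reportedly need (user, job, current access rights, and the policy's history).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AccessChange:
    when: date
    action: str        # e.g. "granted", "revoked"
    by: str            # who made the change
    reason: str = ""

@dataclass
class AccessReviewItem:
    user: str
    job_title: str
    entitlement: str                                  # the access right under review
    history: list[AccessChange] = field(default_factory=list)

    def last_change(self):
        """Most recent change to this entitlement, or None if no history is recorded."""
        return max(self.history, key=lambda c: c.when, default=None)

item = AccessReviewItem(
    user="jdoe", job_title="Payroll Analyst", entitlement="HR-DB:read",
    history=[AccessChange(date(2023, 4, 2), "granted", "manager", "new hire")],
)
print(item.last_change().action)  # "granted"
```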

    Guidelines for designing IT security management tools

    An important factor that impacts the effectiveness of security systems within an organization is the usability of security management tools. In this paper, we present a survey of design guidelines for such tools. We gathered guidelines and recommendations related to IT security management tools from the literature as well as from our own prior studies of IT security management. We categorized and combined these into a set of high-level guidelines and identified the relationships between the guidelines and challenges in IT security management. We also illustrated the need for the guidelines, where possible, with quotes from additional interviews with five security practitioners. Our framework of guidelines can be used by those developing IT security tools, as well as by practitioners and managers evaluating tools.

    Security Engineering for Large Scale Distributed Applications

    These slides (copyright © 2002-2004 Konstantin Beznosov) open with an analogy: airplanes vs. cars. Flying is fast and driving is slow, so why isn't everybody flying? Likewise, why aren't secure systems everywhere? Too often they are almost completely insecure, or "secure" but too expensive and error-prone to build, too complex to administer, and inadequate for real-world problems. Two examples: CORBA Security, with no compliant system, over 600 pages, and three days to install and configure a toy setup; and Web services security, which is harder than RPC-based CORBA. The outlined research direction covers an overview of access control mechanisms, what can be done about these problems, and two specific proposals, an attribute function and composable policy engines, alongside other research projects. The aim is improvement toward security mechanisms that are inexpensive and error-proof to build, effective and inexpensive to administer, adequate for their problem domains, and easy and inexpensive to change and integrate.
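    The two proposals are only named in the slides, so the sketch below is an assumption about what they could look like: an attribute function that looks up a subject's attributes on demand, and policy engines composed by an explicit combinator (deny-overrides here). The names, example policies, and combination rule are illustrative, not Beznosov's actual design.

```python
# A sketch assuming nothing beyond the slide titles: an "attribute function"
# resolving subject attributes, and composable policy engines combined by a
# deny-overrides combinator. All names and rules are illustrative.
from typing import Callable, Iterable

Attributes = dict[str, str]
AttributeFn = Callable[[str], Attributes]          # subject id -> attributes
Policy = Callable[[Attributes, str, str], str]     # (attrs, resource, action) -> "permit"/"deny"/"not_applicable"

def role_policy(attrs, resource, action):
    if resource.startswith("payroll/") and attrs.get("role") == "hr":
        return "permit"
    return "not_applicable"

def clearance_policy(attrs, resource, action):
    if resource.startswith("secret/") and attrs.get("clearance") != "high":
        return "deny"
    return "not_applicable"

def deny_overrides(policies: Iterable[Policy]) -> Policy:
    """Compose engines: any deny wins, otherwise any permit, else deny by default."""
    def combined(attrs, resource, action):
        decisions = [p(attrs, resource, action) for p in policies]
        if "deny" in decisions:
            return "deny"
        return "permit" if "permit" in decisions else "deny"
    return combined

def lookup_attributes(subject: str) -> Attributes:
    """Stand-in attribute function backed by a toy directory."""
    directory = {"alice": {"role": "hr", "clearance": "low"}}
    return directory.get(subject, {})

pdp = deny_overrides([role_policy, clearance_policy])
print(pdp(lookup_attributes("alice"), "payroll/records", "read"))  # permit
print(pdp(lookup_attributes("alice"), "secret/plans", "read"))     # deny
```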